[Frontend][Feature] Add jamba tool parser #9154
Conversation
…kip_special_tokens to False as done in Internlm2ToolParser
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
This reverts commit 16542bc.
@K-Mistele do you have time to review this?
Yes, happy to take a look!
This seems reasonable to me but I will wait on Kyle to review
# TODO: edit comment
# use a regex to find the tool call between the tags
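The quoted lines refer to extracting the tool call emitted between the model's special tags. A minimal sketch of such regex extraction is below; the tag names and the JSON-list payload format are assumptions for illustration, not the exact tokens used by the merged parser:

```python
import json
import re

# Hypothetical tag names: the actual special tokens used by the Jamba
# tool parser are an assumption here.
TOOL_CALLS_START = "<tool_calls>"
TOOL_CALLS_END = "</tool_calls>"

# Non-greedy match for the payload between the tags; DOTALL lets the
# JSON span multiple lines.
TOOL_CALLS_RE = re.compile(
    re.escape(TOOL_CALLS_START) + r"(.*?)" + re.escape(TOOL_CALLS_END),
    re.DOTALL,
)

def extract_tool_calls(model_output: str):
    """Return the parsed tool-call list, or None if no tags are present."""
    match = TOOL_CALLS_RE.search(model_output)
    if match is None:
        return None
    return json.loads(match.group(1))

output = ('Sure, checking. <tool_calls>'
          '[{"name": "get_weather", "arguments": {"city": "Paris"}}]'
          '</tool_calls>')
calls = extract_tool_calls(output)
```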
Is this TODO done?
Yes. Removed it now.
Thanks for catching this
…o add-jamba-tool-parser
Ping? Anything I can do to help get this merged?
Overall looks good! Just a couple of thoughts:
- Right now, tool use tests are already standardized across models in tests/tool_use; you can add a config for Jamba in vllm/tests/tool_use/utils.py so that the same tests that run on other models are run on it. This is preferred since it keeps tool testing standardized and makes sure that all models pass the same tests, rather than defining custom tests for each model.
- Please update the docs at docs/source/serving/openai_compatible_server.md with the relevant information for Jamba - you'll see that Llama 3.1, Hermes, Mistral, and InternLM already have entries there, so you should be able to use the same template.
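For reference, an entry following the pattern of the existing model configs in tests/tool_use/utils.py might look like the sketch below. The ServerConfig shape and the server flags are assumptions modeled on other entries, not the actual merged code:

```python
from typing import Dict, List, TypedDict

# Minimal stand-in for the config shape used by the standardized
# tool-use tests; the real definition lives in tests/tool_use/utils.py.
class ServerConfig(TypedDict):
    model: str
    arguments: List[str]

# Hypothetical Jamba entry, modeled on the existing configs
# (the exact flags used in the merged PR are an assumption).
CONFIGS: Dict[str, ServerConfig] = {
    "jamba": {
        "model": "ai21labs/AI21-Jamba-1.5-Mini",
        "arguments": [
            "--enable-auto-tool-choice",
            "--tool-call-parser", "jamba",
        ],
    },
}
```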
Sounds good! I didn't realize the model was so big. I will defer to @mgoin @DarkLight1337 then.
LGTM, thanks for your patience!
Signed-off-by: charlifu <[email protected]>
Signed-off-by: Vinay Damodaran <[email protected]>
Signed-off-by: Alvant <[email protected]>
Signed-off-by: Amit Garg <[email protected]>
Signed-off-by: qishuai <[email protected]>
Signed-off-by: Sumit Dubey <[email protected]>
Add JambaToolParser to support tool calling for ai21labs/AI21-Jamba-1.5-Mini and ai21labs/AI21-Jamba-1.5-Large.